    RootPath: Root Cause and Critical Path Analysis to Ensure Sustainable and Resilient Consumer-Centric Big Data Processing under Fault Scenarios

    The exponential growth of consumer-centric big data has led to increased concerns regarding the sustainability and resilience of data processing systems, particularly in the face of fault scenarios. This paper presents an innovative approach that integrates Root Cause Analysis (RCA) and Critical Path Analysis (CPA) to address these challenges and ensure sustainable, resilient consumer-centric big data processing. The proposed methodology identifies the root causes behind system faults probabilistically using Bayesian networks. Furthermore, an Artificial Neural Network (ANN)-based critical path method is employed to identify the critical path responsible for high makespan in MapReduce workflows, thereby enhancing fault tolerance and optimizing resource allocation. To evaluate the effectiveness of the proposed methodology, we conduct a series of fault injection experiments that simulate real-world fault scenarios commonly encountered in operational environments. The experimental results show that both models perform very well, with accuracies of 95% and 98%, respectively, enabling the development of more robust and reliable consumer-centric systems.
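
    As a rough illustration of the probabilistic root-cause step, the sketch below ranks hypothetical fault causes by their posterior probability given one observed symptom. The cause names, priors, and likelihoods are assumptions; the paper builds a full Bayesian network over the MapReduce pipeline rather than this single-symptom application of Bayes' rule.

        # Minimal sketch of probabilistic root-cause ranking with Bayes' rule.
        # The cause names, priors, and likelihoods are illustrative assumptions.
        priors = {"disk_failure": 0.2, "network_congestion": 0.3, "skewed_input": 0.5}
        # P(observed symptom "long reduce phase" | cause) -- assumed values
        likelihoods = {"disk_failure": 0.4, "network_congestion": 0.7, "skewed_input": 0.9}

        def rank_root_causes(priors, likelihoods):
            """Return causes sorted by posterior probability given the symptom."""
            evidence = sum(priors[c] * likelihoods[c] for c in priors)
            posteriors = {c: priors[c] * likelihoods[c] / evidence for c in priors}
            return sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)

        for cause, p in rank_root_causes(priors, likelihoods):
            print(f"{cause}: {p:.2f}")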

    Federated-ANN based Critical Path Analysis and Health Recommendations for MapReduce Workflows in Consumer Electronics Applications

    Although much research has been done to improve the performance of big data systems, predicting the performance degradation of these systems quickly and efficiently remains a significant challenge. Unfortunately, the complexity of big data systems is so vast that predicting performance degradation ahead of time is particularly difficult. Long execution time is often discussed in the context of performance degradation of big data systems. This paper proposes MrPath, a federated AI-based critical path analysis approach for holistic performance prediction of MapReduce workflows in consumer electronics applications, while also enabling root-cause analysis of various types of faults. We have implemented a federated artificial neural network (FANN) to predict the critical path in a MapReduce workflow. After the critical path components (e.g., mapper1, reducer2) are predicted, root-cause analysis uses user-defined functions (UDFs) to pinpoint the most likely reasons for the observed performance problems. Finally, health node classification is performed using an ANN-based Self-Organising Map (SOM). The results show that the AI-based critical path analysis method can significantly illuminate the reasons behind long execution times in big data systems.
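
    The federated element of MrPath can be pictured as standard federated averaging: each device trains a small ANN on its own MapReduce execution logs, and only the model weights are aggregated. The layer shapes, client count, and sample sizes below are illustrative assumptions, not the paper's configuration.

        import numpy as np

        # Sketch of federated averaging across clients that each hold a local ANN.
        def federated_average(client_weights, client_sizes):
            """Weighted average of per-client weight lists (one array per layer)."""
            total = sum(client_sizes)
            n_layers = len(client_weights[0])
            averaged = []
            for layer in range(n_layers):
                acc = np.zeros_like(client_weights[0][layer])
                for weights, size in zip(client_weights, client_sizes):
                    acc += (size / total) * weights[layer]
                averaged.append(acc)
            return averaged

        # Three hypothetical clients, each with a two-layer model.
        rng = np.random.default_rng(0)
        clients = [[rng.normal(size=(8, 4)), rng.normal(size=(4, 1))] for _ in range(3)]
        global_model = federated_average(clients, client_sizes=[120, 80, 200])
        print([w.shape for w in global_model])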

    Container-based load balancing for energy efficiency in software-defined edge computing environment

    The workload generated by Internet of Things (IoT)-based infrastructure is often handled by cloud data centers (DCs). However, in recent times, an exponential increase in the deployment of IoT-based infrastructure has escalated the workload on these DCs, so they are no longer fully capable of meeting the strict demands of IoT devices for low latency and high data rates while provisioning IoT workloads. Therefore, to support latency-sensitive workloads, an intermediate layer known as edge computing has successfully rebalanced the entire service provisioning landscape. In this IoT-edge-cloud ecosystem, the large number of interactions and data transmissions among the different layers can increase the load on the underlying network infrastructure, and software-defined edge computing has emerged as a viable solution to these latency-sensitive workload issues. Additionally, energy consumption is a major challenge in resource-constrained edge systems, and existing solutions are not fully suited to the software-defined edge ecosystem when handling IoT workloads with an optimal trade-off between energy efficiency and latency. Hence, this article proposes a lightweight and energy-efficient container-as-a-service (CaaS) approach based on software-defined edge computing to provision the workloads generated by latency-sensitive IoT applications. A Stackelberg game is formulated for two-period resource allocation between end-user/IoT devices and edge devices, considering the service level agreement. Furthermore, an energy-efficient ensemble for container allocation, consolidation, and migration is designed for load balancing in the software-defined edge computing environment. The proposed approach is validated in a simulated environment with respect to CPU service time, network service time, overall delay, and energy consumption, and the results show its superiority over the existing variants.
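
    The container consolidation part of the approach can be sketched as a simple energy-aware packing heuristic: place container demands on as few active edge nodes as possible and estimate the resulting power draw. The node capacity, demands, and power figures below are assumptions; the paper couples this with a Stackelberg game and migration policies.

        # Energy-aware container consolidation sketch: greedy first-fit decreasing
        # packing onto as few active edge nodes as possible, then a linear power
        # estimate per node. All numbers are illustrative assumptions.
        def consolidate(containers, node_capacity_mips=2000, idle_power_w=20.0, busy_power_w=60.0):
            """Pack container CPU demands (MIPS) onto edge nodes and estimate total power."""
            nodes = []  # remaining capacity of each active node
            for demand in sorted(containers, reverse=True):
                for i, free in enumerate(nodes):
                    if demand <= free:
                        nodes[i] -= demand
                        break
                else:
                    nodes.append(node_capacity_mips - demand)  # power on a new node
            power = 0.0
            for free in nodes:
                utilisation = 1.0 - free / node_capacity_mips
                power += idle_power_w + (busy_power_w - idle_power_w) * utilisation
            return len(nodes), power

        active_nodes, est_power = consolidate([500, 300, 900, 1200, 250, 400])
        print(active_nodes, round(est_power, 1))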

    Deep neuro‐fuzzy approach for risk and severity prediction using recommendation systems in connected health care

    The Internet of Things (IoT) and data science have revolutionized the technological landscape across the globe. As a result, health care ecosystems are adopting these cutting-edge technologies to provide assistive and personalized care to patients. However, this vision is incomplete without data-focused mechanisms (such as machine learning and big data analytics) that can act as enablers, providing early detection and treatment of patients even without hospital admission. Recently, there has been an increasing trend of providing assistive recommendations and timely alerts regarding disease severity to patients, and doctors can now remotely monitor a patient's current health status by analyzing the data generated by IoT devices. Motivated by these facts, we design a health care recommendation system that provides multilevel decision-making related to the risk and severity of patient diseases. The proposed system uses an all-disease classification mechanism based on convolutional neural networks to segregate different diseases on the basis of a patient's vital parameters. After classification, a fuzzy inference system is used to compute risk levels for the patients. In the last step, based on the information provided by the risk analysis, patients are given recommendations about the severity staging of the associated diseases for timely and suitable treatment. The proposed work has been evaluated using different disease-related datasets, and the outcomes are promising.
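
    The fuzzy-inference step can be illustrated with a single vital sign: fuzzify the reading into low/medium/high risk memberships and defuzzify to a numeric risk score. The membership functions and risk weights below are assumptions; the proposed system combines several vitals and a CNN-based disease classifier before this stage.

        # Toy fuzzy inference over one vital sign (heart rate); all breakpoints
        # and risk weights are illustrative assumptions.
        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def risk_level(heart_rate):
            """Fuzzify heart rate into low/medium/high risk and defuzzify to a score."""
            memberships = {
                "low": tri(heart_rate, 40, 65, 90),
                "medium": tri(heart_rate, 80, 100, 120),
                "high": tri(heart_rate, 110, 140, 200),
            }
            peaks = {"low": 0.2, "medium": 0.5, "high": 0.9}  # assumed risk scores
            total = sum(memberships.values()) or 1.0
            return sum(memberships[k] * peaks[k] for k in memberships) / total

        print(round(risk_level(115), 2))  # partly "medium", partly "high"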

    A Comprehensive Review on Fog Removal Techniques in Single Images

    Haze is formed due to two major phenomena: atmospheric attenuation and airlight. This paper presents a review of the different techniques for removing fog from images captured in hazy environments in order to recover better, enhanced-quality haze-free images. Images of outdoor scenes often contain degradation due to haze, resulting in contrast reduction and color fading. Haze removal, also known as visibility restoration, refers to the various methods that aim to reduce or remove the degradation introduced while the digital image was being acquired. Haze removal techniques recover the color and contrast of the scene. In this paper, various fog removal techniques are examined and compared. DOI: 10.17762/ijritcc2321-8169.15052
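
    Most of the reviewed dehazing methods build on the atmospheric scattering model, in which the observed intensity mixes the true scene radiance with airlight according to the transmission. The sketch below inverts that model with a constant airlight and transmission, which is an assumption; practical methods (e.g., the dark channel prior) estimate both from the image itself.

        import numpy as np

        # Atmospheric scattering model: I = J * t + A * (1 - t), where I is the
        # hazy image, J the scene radiance, A the airlight, and t the transmission.
        # Constant A and t here are simplifying assumptions for illustration.
        def dehaze(hazy, airlight=0.9, transmission=0.6, t_min=0.1):
            """Invert the scattering model to recover an approximate haze-free image."""
            t = max(transmission, t_min)            # avoid division blow-up
            restored = (hazy - airlight * (1.0 - t)) / t
            return np.clip(restored, 0.0, 1.0)

        hazy_patch = np.full((4, 4, 3), 0.75)       # a flat, washed-out hazy patch
        print(dehaze(hazy_patch)[0, 0])             # restored pixel with higher contrast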

    Service vs Protection: A Bayesian Learning Approach for Trust Provisioning in Edge of Things Environment

    Edge of Things (EoT) technology enables end-users to participate, through smart sensors and mobile devices (such as smartphones and wearables), in the smart devices deployed across a smart city. Trust management is the main challenge in an EoT infrastructure when selecting trusted participants, and the Quality of Service (QoS) is highly affected by malicious users supplying fake or altered data. In this paper, a Robust Trust Management (RTM) scheme is designed based on Bayesian learning and collaborative filtering. The proposed RTM model is updated regularly at a specific interval, applying a decay value to the currently calculated scores so that behavior changes are captured quickly. The dynamic characteristics of edge nodes are analyzed with a new probability score mechanism derived from the behavior of recent services. The performance of the proposed trust management scheme is evaluated in a simulated environment, with the percentage of collaborating devices tuned to 10%, 50%, and 100%. The proposed RTM scheme achieves a maximum accuracy of 99.8%, and the experimental results demonstrate that it outperforms existing techniques in filtering malicious behavior and in accuracy.
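
    The Bayesian-learning part of the scheme can be pictured as a Beta-distribution trust score in which old evidence is decayed at each interval so that sudden behavior changes show up quickly. The decay factor and update rule below are assumptions; the paper additionally applies collaborative filtering across devices.

        # Beta-distribution trust score with exponential decay of old evidence.
        # Decay factor, priors, and the example interaction counts are assumptions.
        class TrustScore:
            def __init__(self, decay=0.9):
                self.alpha = 1.0    # prior pseudo-count of positive interactions
                self.beta = 1.0     # prior pseudo-count of negative interactions
                self.decay = decay  # weight given to historical evidence each interval

            def update(self, positives, negatives):
                """Decay old evidence, then add the interval's observed outcomes."""
                self.alpha = self.decay * self.alpha + positives
                self.beta = self.decay * self.beta + negatives

            @property
            def trust(self):
                """Expected probability that the node behaves honestly."""
                return self.alpha / (self.alpha + self.beta)

        node = TrustScore()
        node.update(positives=9, negatives=1)   # mostly good behavior
        node.update(positives=0, negatives=5)   # sudden malicious turn
        print(round(node.trust, 2))             # trust drops quickly due to decay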

    Cross-chain Transaction Validation using Lock-and-Key Method for Multi-System Blockchain

    Blockchains have profoundly impacted finance and administration, but the current blockchain platforms suffer from several issues, including a lack of system interoperability. Currently deployed blockchain application platforms only work within their own networks. Although the underlying concept of all blockchain networks is largely similar, transacting across blockchain networks involves centralised third-party mediators. These third-party intermediaries establish security and trust by keeping track of "account balances" and attesting to the validity of transactions in a centralised ledger. This lack of sufficient inter-blockchain connectivity hinders the mainstream adoption of blockchain, even though blockchain technology could be a solid solution for many systems if it could grow and interoperate with them. For the multi-system blockchain concept to materialise, a mechanism is required that connects and communicates with the blockchain systems of various entities in a distributed manner (without any intermediary) while maintaining the trust and integrity established by the individual blockchains. This paper explores several methods for verifying cross-chain transactions among various blockchains. Efficient verification of cross-chain transactions faces many difficulties, and current research has only scratched the surface. In addition to summarising and categorising these strategies, the paper also proposes a novel lock-and-key mechanism that overcomes the existing drawbacks.
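
    A generic hash-lock illustrates the lock-and-key intuition: one chain locks value under the hash of a secret, and the counterpart transaction is validated only when the matching key (the preimage) is revealed before a timeout. This is a simplified assumption for illustration, not the paper's full protocol.

        import hashlib
        import time

        # Hash-lock sketch: lock under sha256(secret), validate by revealing the key.
        def make_lock(secret: bytes, timeout_s: float = 3600.0):
            """Return a lock record that a counterpart chain can verify against."""
            return {"digest": hashlib.sha256(secret).hexdigest(),
                    "expires_at": time.time() + timeout_s}

        def validate_key(lock: dict, key: bytes) -> bool:
            """A transaction is valid only if the key matches before the lock expires."""
            not_expired = time.time() < lock["expires_at"]
            matches = hashlib.sha256(key).hexdigest() == lock["digest"]
            return not_expired and matches

        lock = make_lock(b"cross-chain-secret")
        print(validate_key(lock, b"cross-chain-secret"))  # True
        print(validate_key(lock, b"wrong-key"))           # False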

    DaaS: Dew Computing as a Service for Intelligent Intrusion Detection in Edge-of-Things Ecosystem

    Edge of Things (EoT) enables the seamless transfer of services, storage, and data processing from the cloud layer to edge devices in large-scale distributed Internet of Things (IoT) ecosystems (e.g., industrial systems). This transition raises privacy and security concerns across the different layers of the EoT paradigm. Intrusion detection systems (IDSs) are implemented in EoT ecosystems to protect the underlying resources from attackers. However, current IDSs are not intelligent enough to control false alarms, which significantly lowers reliability and adds to the analysis burden on the IDSs. In this article, we present Dew Computing as a Service (DaaS) for intelligent intrusion detection in EoT ecosystems. In DaaS, a deep learning-based classifier is used to design an intelligent alarm filtration mechanism, in which the filtration accuracy is improved (or sustained) by using deep belief networks (DBNs). In the past, cloud-based techniques have been applied to offload EoT tasks, which increases the burden on the middle layer and raises communication delay; here, we instead use dew computing features to design the smart false alarm reduction system. When evaluated in a simulated environment, DaaS exhibits a lower response time for processing data in the EoT ecosystem. The revamped DBN model achieves classification accuracy of up to 95%, and it delivers a 60% improvement in latency and a 35% reduction in cloud server workload compared to an edge IDS.
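
    The alarm-filtration idea can be sketched as scoring each raw IDS alert and forwarding only those likely to be real intrusions. The feature set and fixed logistic weights below are assumptions standing in for the trained deep belief network.

        import numpy as np

        # Score alerts with a fixed logistic model and drop likely false alarms.
        # Features and weights are illustrative assumptions, not the paper's model.
        WEIGHTS = np.array([1.8, -2.5, 0.9])   # [payload_entropy, src_is_whitelisted, repeat_count]
        BIAS = -0.4

        def intrusion_probability(features):
            """Logistic score: probability that the alert reflects a real intrusion."""
            z = float(np.dot(WEIGHTS, features) + BIAS)
            return 1.0 / (1.0 + np.exp(-z))

        def filter_alarms(alerts, threshold=0.5):
            """Forward only alerts scored at or above the threshold."""
            return [a for a in alerts if intrusion_probability(a["features"]) >= threshold]

        alerts = [
            {"id": 1, "features": np.array([0.9, 0.0, 2.0])},  # suspicious, repeated
            {"id": 2, "features": np.array([0.2, 1.0, 0.0])},  # whitelisted source
        ]
        print([a["id"] for a in filter_alarms(alerts)])         # keeps only id 1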

    Intrusion Detection in Critical SD-IoT Ecosystem

    The Internet of Things (IoT) connects physical objects with intelligent decision-making support to exchange information and enable various critical applications. The IoT allows billions of devices to connect to the Internet, thereby collecting and exchanging real-time data for intelligent services. The complexity of IoT management makes it difficult to deploy and manage services dynamically. Thus, in recent times, Software Defined Networking (SDN) has been widely adopted in IoT service management to provide dynamic and adaptive capabilities to the traditional IoT ecosystem, resulting in the evolution of a new paradigm known as Software-Defined IoT (SD-IoT). Although SD-IoT has several benefits, it also opens new frontiers for attackers to launch attacks and intrusions. It is especially challenging to operate in a critical IoT environment, where any delay or disruption caused by an intruder can be life-threatening or cause significant destruction. However, given the flexibility of SDN, it is easier to deploy intrusion detection systems that can detect attacks or anomalies promptly. Thus, in this paper, we deploy a hybrid architecture that allows monitoring, analysis, and detection of attacks and anomalies in the SD-IoT ecosystem. We consider three scenarios: a) denial of service, b) distributed denial of service, and c) packet fragmentation. The work is validated through simulated experiments using SNORT deployed on the Mininet platform for the three scenarios.
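
    For the DoS and DDoS scenarios, the underlying check is volumetric: flag sources whose packet rate within a time window exceeds a threshold. In the paper the detection itself is performed by SNORT rules on a Mininet topology; the window size and threshold below are illustrative assumptions.

        from collections import Counter

        # Volumetric DoS check: count packets per source within a window and flag
        # sources above a threshold. Window and threshold are assumed values.
        def flag_dos_sources(packets, window_s=10, threshold=500):
            """packets: iterable of (timestamp, src_ip); returns suspected DoS sources."""
            if not packets:
                return set()
            start = min(ts for ts, _ in packets)
            counts = Counter(src for ts, src in packets if ts - start <= window_s)
            return {src for src, n in counts.items() if n > threshold}

        # A flood from 10.0.0.5 mixed with light traffic from 10.0.0.7.
        traffic = [(t * 0.01, "10.0.0.5") for t in range(600)] + [(1.0, "10.0.0.7")]
        print(flag_dos_sources(traffic))   # {'10.0.0.5'}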